Technical Assessment

About

The Technical Assessment process provides a fact-based understanding of the current level of product knowledge, technical maturity, program status and technical risk by comparing assessment results against defined criteria. These results enable a better understanding of the health and maturity of the program, giving the Program Manager (PM) a sound technical basis upon which to make program decisions.

Timeline

Disciplined technical assessment activities begin early in a system’s life cycle, starting with an examination of the status of development planning activities and efforts. During system development, technical assessments provide a basis for tracking the development of the system and lower-level system element designs. Disciplined technical assessments support the establishment of the various baselines and the achievement of system verification. Technical assessment activities continue into manufacturing and production and through operations and support, where they underpin reliability growth and sustainment engineering efforts.

Role of the PM and SE

The PM and Systems Engineer evaluate technical maturity in support of program decisions at the key event-driven technical reviews and audits (see Systems Engineering (SE) Guidebook, Section 3 Technical Reviews and Audits) that occur throughout the acquisition life cycle. The PM and Systems Engineer use various measures and metrics, including Technical Performance Measures (TPM) and leading indicators, to gauge technical progress against planned goals, objectives and requirements. (See SE Guidebook, Section 4.1.3 Technical Assessment Process, Technical Performance Measures sub-section)

Technical assessments against agreed-upon measures enable data-driven decisions. Evidence-based evaluations that communicate progress and technical risk are essential for the PM to determine the need for revised program plans or technical risk mitigation actions throughout the acquisition life cycle.

Technical Assessment Activities and Products

Timelines

The PM should ensure that technical assessments occur routinely throughout the life cycle on a reporting timeline that supports forecasting and the timely resolution of risks, informing decision makers of technical progress to plan and supporting the Earned Value Management System (EVMS). Some elements of technical assessment should be reported monthly to focus programmatic attention, while other assessments may be quarterly or annual. In all cases the assessment timelines should allow for tracking trends over time to show stability and the impact of corrective actions before major reviews and milestones. The PM should ensure that assessments are appropriately contracted, resourced and staffed, and include appropriate stakeholder and subject matter expert participation.

Products

Technical assessment products should form the basis of both the entrance criteria and the exit criteria for event-driven technical reviews and audits (see SE Guidebook, Section 3 Technical Reviews and Audits). For example, the percentage completion of documents and drawings could be an entrance criterion for a review, while the output is an objective assessment of technical progress, maturity and risk.

Product Tasks
14-1-1: Develop and implement technical assessment metrics
  1. Identify appropriate technical measures and metrics to assess program health and technical progress.
  2. Document and update agreed-upon technical measures and metrics in the technical performance measures and metrics section of the program’s systems engineering plan.
  3. Incorporate appropriate technical measures and metrics reporting into the program’s contract(s) to obtain the required technical data from developers needed to assess program technical health and progress.
  4. Conduct analyses on the technical measures and metrics to determine risk and to develop risk mitigation strategies.
  5. Propose changes in the technical approach to address risk mitigation activities.
  6. Using the technical measures and metrics, conduct assessments of technical maturity, process health and stability, and risk to communicate progress to stakeholders and authorities at key decision points.

Source: AWQI eWorkbook

Activities of the PM

The PM should approve the Technical Assessment products for the program as part of three documents: (1) the performance measurement baseline (PMB), to capture time-phased measures against the Work Breakdown Structure (WBS) (see SE Guidebook, Section 4.1 Technical Planning Process); (2) a resource-allocated Integrated Master Schedule (IMS) (see SE Guidebook, Section 4.1 Technical Planning Process); and (3) the Systems Engineering Plan (see SE Guidebook, Section 1.5 Systems Engineering Plan), to govern the overall measures and metrics to be collected, the update cycle, tasking, control thresholds and expected analysis.

Activities of the SE

The Systems Engineer assists the PM in planning and conducting the Technical Assessment process. This assistance may include advising on technical reviews and audits, defining the technical documentation and artifacts that serve as review criteria for each review/audit, and identifying technical performance measures and metrics.

Inputs and Outputs

Inputs to the Technical Assessment process should include approved program plans (e.g., Systems Engineering Plan, Cybersecurity Strategy (CSS), Acquisition Strategy (AS), Acquisition Program Baseline (APB)), engineering products (e.g., TPMs, drawings, specifications and reports, prototypes, system elements and engineering development modules) and current performance metrics. Outputs may include various reports and findings (e.g., technical review reports, corrective actions, Program Support Assessment (PSA) findings or test reports).

Technology Readiness Assessments (TRA)

A TRA is a systematic, metrics-based technical assessment process that assesses the maturity of, and the risk associated with, critical technologies to be used in Major Defense Acquisition Programs (MDAP). It is conducted by the PM with the assistance of an independent team of subject matter experts (SME).

PMs of MDAPs should conduct knowledge-building TRAs throughout the DoD acquisition life cycle, including at the Preliminary Design Review (PDR), the Critical Design Review (CDR) and Milestone C. These assessments should include the reassessment of all elements of the system design to identify any new critical technology elements and their associated technology readiness levels resulting from system design changes or new knowledge obtained during the Engineering and Manufacturing Development phase. See the Engineering of Defense Systems Guidebook and the DoD Technology Readiness Assessment (TRA) Guidance for additional information.
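
A TRA rests on assigning Technology Readiness Levels (TRL) to critical technologies and comparing them against the maturity expected at a given point in the life cycle. The following minimal Python sketch illustrates one way a program team might track that comparison; the technology names, assessed TRLs and target TRL are illustrative assumptions, not outputs of the TRA process itself.

```python
# Minimal, hypothetical sketch: tracking the critical technologies identified in a
# TRA and flagging those assessed below a target Technology Readiness Level (TRL).
# Technology names, TRL values and the target TRL are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CriticalTechnology:
    name: str
    assessed_trl: int  # TRL 1-9 assigned by the independent assessment team
    basis: str         # evidence supporting the assessment (analysis, test, demo)

def immature_technologies(techs, target_trl):
    """Return the critical technologies assessed below the target TRL."""
    return [t for t in techs if t.assessed_trl < target_trl]

if __name__ == "__main__":
    watch_list = [
        CriticalTechnology("Wide-band seeker", 5, "breadboard tested in a lab environment"),
        CriticalTechnology("Composite motor case", 7, "prototype demonstrated in an operational environment"),
    ]
    for tech in immature_technologies(watch_list, target_trl=6):
        print(f"Maturity risk: {tech.name} at TRL {tech.assessed_trl} ({tech.basis})")
```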

Technical Performance Measures (TPM)

Technical performance measures and metrics (TPM) are the method of collecting and providing information to Program Managers (PM) and Systems Engineers at routine intervals for decision making. Metrics are measures collected over time for the purpose of seeing trends and forecasting program progress to plan. TPMs encompass the quantifiable attributes of both the system’s development processes and status, as well as the system’s product performance and maturity. Early in the life cycle, TPMs may be estimated based on numerous assumptions and on modeling and simulation. As the life cycle proceeds, actual demonstrated data replaces estimates and adds to the fidelity of the information. The insight gained can be at any level: the entire system, subsystem elements, enabling system elements and other contributing mission elements (e.g., System of Systems (SoS)), as well as all of the SE processes and SE disciplines in use across the program.

The goal of having a robust TPM process is the ability for the PM, Systems Engineer and senior decision makers to: (1) gain quantifiable insight into technical progress, trends and risks; (2) empirically forecast the impact on program cost, schedule and performance; and (3) provide measurable feedback on changes made to program planning or execution to mitigate potentially unfavorable outcomes. Additionally, if a sufficient level of margin exists, TPMs help identify trade space and can be used by PMs to balance cost, schedule and performance throughout the life cycle. The PM and Systems Engineer should use TPM data as the basis of evidence to support entrance/exit criteria, incentives and direction given at technical reviews or milestone decisions. TPMs provide leading indicators of performance deficiencies or system risk.
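
As a simple illustration of the forecasting idea above, the following Python sketch fits a linear trend to monthly current best estimates of a weight TPM and projects the value expected at an upcoming review. The TPM, values, threshold and review month are hypothetical assumptions; a real program would apply its own agreed-upon estimating methods.

```python
# Minimal, hypothetical sketch: fit a linear trend to monthly current best estimates
# (CBE) of a weight TPM and forecast the value expected at an upcoming review.
# The TPM, values, threshold and review month are illustrative assumptions.
from statistics import linear_regression  # Python 3.10+

months = [1, 2, 3, 4, 5, 6]                   # reporting periods to date
weight_cbe = [430, 436, 441, 449, 455, 462]   # reported CBE values, kg
threshold = 475.0                             # not-to-exceed weight, kg
review_month = 9                              # month of the next technical review

slope, intercept = linear_regression(months, weight_cbe)
forecast = slope * review_month + intercept

print(f"Forecast weight at month {review_month}: {forecast:.1f} kg")
if forecast > threshold:
    print("Leading indicator: projected threshold breach; consider mitigation or trade-offs.")
```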

Activities and Products

TPMs should be identified, tailored and updated in the Systems Engineering Plan (SEP) to fit the acquisition phase of the program. As the program progresses through the acquisition life cycle, TPMs should be added, updated or deleted. TPMs should be chosen that both confirm the performance of the program in the current phase and provide leading indicators of risks and issues in the next phase. In the early phases of a program (e.g., pre-Milestone A), the program should document a strategy for identifying, prioritizing and selecting TPMs. As the program matures, the program should document the actual TPMs to be used in the SEP. Further TPM guidance is provided in the SEP outline.

TPM Categories and Definitions

Although the specific TPMs used to monitor a program are unique to that program, there are 16 categories that are of concern within the Department across all DoD acquisition programs. Having TPMs in each of these core categories is considered a best practice for effective technical management. For each of the categories in SE Guidebook Table 4-2, the PM and Systems Engineer should consider at least one TPM to address product and process performance. For some categories, such as “System Performance,” there should be multiple TPMs to monitor forecasted performance of each Key Performance Parameter (KPP) and each Key System Attribute (KSA). This specific set of TPMs relates to the test community’s use of Critical Technical Parameters (CTP) and should be identified as such to focus the test community. The traceability of the TPMs to the core categories should be documented in the SEP. SE Guidebook Table 4-2 presents the core TPM categories and their definitions.

Table 4-2. Core Technical Performance Measure Category Definitions

Mission Integration Management (System of Systems (SoS) Integration/Interoperability): Metrics evaluate the stability, maturity and adequacy of external interfaces to understand the risks from/to other programs integrating with the program toward providing the required capability, on time and within budget. Understand the growth, change and correctness of the definition of external and internal interfaces. Evaluate the integration risks based on interface maturity. (See SE Guidebook Section 5.2.5 Integration and Section 6.12 Interoperability and Dependencies.)

Mission (End-to-End) Performance: Measures the overall ability of a system to accomplish a mission when used by representative personnel in the planned environment in conjunction with external systems. Metrics should provide an understanding of the projected performance of a mission thread in achieving the intended mission capability. These may relate to the Critical Operational Issues, criteria and measures of effectiveness used by the operational test agencies.

Reliability, Availability and Maintainability (RAM): Metrics should evaluate the requirements imposed on the system to ensure it is operationally ready for use when needed, will successfully perform assigned functions and can be economically operated and maintained within the scope of logistics concepts and policies. (See Section 5.18 Reliability and Maintainability Engineering.)

System Performance: Metrics should evaluate the performance of the system or subsystem elements in achieving critical technical attributes (e.g., weight) that contribute to meeting system requirements. There should be multiple TPMs to monitor forecasted performance of Key Performance Parameters and Key System Attributes. These are called Critical Technical Parameters (CTP) by the test community.

Program Protection: System assurance metrics evaluate the safeguarding of the system and its technical data anywhere in the acquisition process, including the technologies being developed, the support systems (e.g., test and simulation equipment) and research data with military applications.

Cybersecurity: Metrics should evaluate the application of Defense in Depth techniques for the detect, protect and react paradigm, and performance against Security Technical Implementation Guide controls.

Manufacturing Management: Metrics should evaluate the extent to which the product can be manufactured with relative ease at minimum cost and schedule and with maximum reliability. (See Section 5.14 Manufacturing and Quality.)

Manufacturing Quality: Metrics should track both quality of conformance and quality of design. Quality of conformance is the effectiveness of the design and manufacturing functions in executing the product manufacturing requirements and process specifications while meeting tolerances, process control limits and target yields for a given product group (e.g., defects per quantity produced). (See Section 5.14 Manufacturing and Quality.)

Schedule Management: Metrics should assess schedule health (e.g., the Defense Contract Management Agency 14-point health check), the associated completeness of the Work Breakdown Structure and the risk register. A healthy, complete and risk-enabled schedule forms the technical basis for the Earned Value Management System (EVMS); strong schedule metrics are paramount for accurate EVMS data.

Staffing and Personnel Management: Metrics should evaluate the adequacy of the effort, skills, experience and quantity of personnel assigned to the program to meet management objectives throughout the acquisition life cycle.

Resource Management: Metrics should evaluate the adequacy of resources and/or tools (e.g., models, simulations, automated tools, synthetic environments) to support the schedule. (See also Table 5-7, Product Support Considerations.)

Software Development Management: Metrics should evaluate software development progress against the software development plan, for example, the rate of code generation (lines of code per man-hour). (See Section 2.2.4 Software Engineering.)

Software Quality: Metrics should address software technical performance and quality (e.g., defects, rework), evaluating the software’s ability to meet user needs. (See Section 2.2.4 Software Engineering.)

Requirements Management: Metrics evaluate the stability and adequacy of the requirements to provide the required capability, on time and within budget, including the growth, change, completeness and correctness of system requirements. (See Section 4.1.4 Requirements Management Process.)

Risk Management: Metrics should include the number of risks open over time or an aggregate of risk exposure, the potential impact to performance, cost and schedule (a minimal sketch of one such aggregation follows this table). (See Section 4.1.5 Risk Management Process.)

Test Management: Metrics should include measures of the stability of the verification and validation process (e.g., number of test points, development of test vignettes and test readiness).
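
The Risk Management category above refers to an aggregate of risk exposure. The Python sketch below shows one plausible way to compute such an aggregate, treating exposure as likelihood multiplied by consequence summed over open risks; the rating scales and example risks are illustrative assumptions, not a prescribed DoD method.

```python
# Minimal, hypothetical sketch of an aggregate risk exposure metric: exposure is
# taken as likelihood multiplied by consequence, summed over open risks. Rating
# scales and example risks are illustrative assumptions, not a prescribed method.
from dataclasses import dataclass

@dataclass
class Risk:
    title: str
    likelihood: float   # probability of occurrence, 0.0 to 1.0
    consequence: int    # consequence rating, 1 (negligible) to 5 (severe)
    is_open: bool = True

def aggregate_exposure(risks):
    """Sum likelihood-weighted consequence over all open risks."""
    return sum(r.likelihood * r.consequence for r in risks if r.is_open)

register = [
    Risk("Immature seeker algorithm", 0.7, 4),
    Risk("Single-source supplier", 0.4, 3),
    Risk("Test range availability", 0.2, 2, is_open=False),  # closed risk, excluded
]

print(f"Open risks: {sum(r.is_open for r in register)}")
print(f"Aggregate risk exposure: {aggregate_exposure(register):.1f}")
```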

TPM Hierarchy

As shown in Figure 4-5, TPMs at the Management Decisional level may be allocated or decomposed into supporting details associated with subsystem assemblies along the lines of the WBS and/or organizational management hierarchies. For example, a system weight TPM may be allocated to separate subsystem assemblies, or a software productivity TPM may be added to manage a high-risk subcontractor’s development efforts.
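
As a minimal illustration of this decomposition, the following Python sketch rolls subsystem weight estimates up a notional WBS and compares each against its allocation. The WBS elements, allocations and estimates are hypothetical assumptions for illustration only.

```python
# Minimal, hypothetical sketch: a system weight TPM decomposed into subsystem
# allocations keyed to WBS elements, with current best estimates (CBE) rolled back
# up for comparison. WBS numbers, allocations and estimates are illustrative.
allocations = {              # weight allocated to each subsystem, kg
    "1.1 Airframe": 180.0,
    "1.2 Propulsion": 150.0,
    "1.3 Guidance": 60.0,
    "1.4 Payload": 85.0,
}
current_estimates = {        # latest reported CBE per subsystem, kg
    "1.1 Airframe": 176.5,
    "1.2 Propulsion": 158.2,
    "1.3 Guidance": 57.9,
    "1.4 Payload": 83.0,
}

for wbs, allocated in allocations.items():
    delta = current_estimates[wbs] - allocated
    status = "OVER allocation" if delta > 0 else "within allocation"
    print(f"{wbs}: CBE {current_estimates[wbs]:.1f} kg vs {allocated:.1f} kg ({status})")

print(f"System: CBE {sum(current_estimates.values()):.1f} kg "
      f"vs {sum(allocations.values()):.1f} kg allocated")
```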

Figure 4-5: TPM Hierarchy

TPM Characteristics

Figure 4-6 (from the SE Guidebook) depicts the characteristics of a properly defined and monitored TPM, which provide early detection or prediction of problems that require management attention. TPM reporting should be in terms of actual versus planned progress, plotted as a function of time and aligned with key points in the program schedule (e.g., technical reviews). A continuous (historical) plot of planned and actual values for each TPM, along with program planning information, enables assessment of performance trends (i.e., progress-to-plan relationships with respect to both objective and threshold values). As illustrated in the figure, a good metric has four attributes.


Figure 4-6: Leading Indicators Influence Risk Mitigation Planning
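
The progress-to-plan comparison described above can be reduced to a simple check of each reported actual against the time-phased planned value and an agreed tolerance. The Python sketch below illustrates only that check, with a hypothetical planned profile, tolerance and reported values; real TPM reporting would also track thresholds, objectives and contingency, as discussed next.

```python
# Minimal, hypothetical sketch of the progress-to-plan comparison: each reported
# actual is checked against the time-phased planned value and an agreed tolerance.
# The planned profile, tolerance and reported values are illustrative assumptions.
planned_profile = {6: 0.80, 12: 0.85, 18: 0.90, 24: 0.95}  # planned value by program month
tolerance = 0.02                                           # allowable deviation from plan

def assess(month, actual):
    planned = planned_profile[month]
    variance = actual - planned
    status = "within plan" if abs(variance) <= tolerance else "outside tolerance band"
    return f"Month {month}: actual {actual:.2f} vs plan {planned:.2f} ({status})"

print(assess(12, 0.82))  # flags a deviation that warrants management attention
print(assess(18, 0.91))  # tracking to plan
```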

To achieve an accurate status, TPM reporting should account for uncertainties such as measurement error and the immaturity of the item being measured. Allotted values for these uncertainties are termed “Contingency” and are used to adjust the Current Best Estimate (CBE) to arrive at a Worst Case Estimate (WCE) for purposes of comparison against the planned profile, thresholds and goals. For example, if a surrogate item is used to determine a measured value, it would warrant a greater contingency factored into the WCE than if the actual end item were used. Contingency is allocated as part of each WCE data point and typically decreases as the system and measurements mature, while Margin is not allocated. “Margin” is the amount of growth that can be accommodated while still remaining within the threshold (the remainder of Threshold minus WCE). Margin is trade space available to the PM to offset under-achieving measures. SE Guidebook Figure 4-7 depicts the relationship between Contingency, CBE, WCE, Threshold and Margin, as well as example criteria for how contingency changes as the system/testing matures.


Figure 4-7: TPM Contingency Definitions
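
The arithmetic implied by these definitions is straightforward, as the following Python sketch illustrates: contingency adjusts the CBE to a WCE, and margin is the remainder between the WCE and the threshold. The values and contingency percentage are illustrative assumptions, not program data.

```python
# Minimal, hypothetical sketch of the arithmetic implied by these definitions:
# contingency adjusts the current best estimate (CBE) to a worst case estimate (WCE),
# and margin is the remainder between the WCE and the threshold. Values are illustrative.
cbe = 455.0              # current best estimate, kg
contingency_pct = 0.04   # larger early in the program or when surrogates are measured
threshold = 490.0        # not-to-exceed value, kg

wce = cbe * (1 + contingency_pct)   # worst case estimate
margin = threshold - wce            # remaining trade space available to the PM

print(f"WCE: {wce:.1f} kg; margin to threshold: {margin:.1f} kg")
if margin < 0:
    print("Projected threshold breach: corrective action or requirements trade needed.")
```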


